
    Silver Standard Masks for Data Augmentation Applied to Deep-Learning-Based Skull-Stripping

    The bottleneck of convolutional neural networks (CNNs) for medical imaging is the amount of annotated data required for training. Manual segmentation is considered the "gold standard". However, medical imaging datasets with expert manual segmentations are scarce, as this step is time-consuming and expensive. In this work we propose the use of what we refer to as silver standard masks for data augmentation in deep-learning-based skull-stripping, also known as brain extraction. We generated the silver standard masks using the consensus algorithm Simultaneous Truth and Performance Level Estimation (STAPLE). We evaluated CNN models trained with the silver and gold standard masks, validated the silver standard masks for CNN training on one dataset, and showed their generalization to two other datasets. Our results indicate that models trained with silver standard masks are comparable to models trained with gold standard masks and generalize better. Moreover, silver standard masks could be used to augment the input dataset at the training stage, reducing the need for manual segmentation at this step.
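The idea of fusing several automatic segmentations into one consensus mask can be illustrated with a plain majority vote. Note this is a deliberate simplification: STAPLE itself is an EM algorithm that additionally estimates each rater's sensitivity and specificity, whereas the sketch below weights all methods equally.

```python
import numpy as np

def majority_vote_consensus(masks):
    """Fuse several binary masks into one consensus ("silver standard") mask.

    `masks`: list of binary NumPy arrays of identical shape, one per
    automatic skull-stripping method. STAPLE additionally estimates each
    rater's performance via EM; plain voting, shown here, ignores that.
    """
    stacked = np.stack(masks, axis=0)        # (n_methods, H, W[, D])
    votes = stacked.sum(axis=0)              # per-voxel agreement count
    return (votes * 2 > len(masks)).astype(np.uint8)  # strict majority

# Toy example: three 2x2 "masks" from three hypothetical methods
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 1], [1, 0]])
consensus = majority_vote_consensus([m1, m2, m3])
```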

    Iamxt: max-tree toolbox for image processing and analysis

    iamxt is an array-based max-tree toolbox implemented in Python using the NumPy library for array processing. It provides state-of-the-art methods for building and processing the max-tree, and a large set of visualization tools for viewing the tree and the contents of its nodes. The array-based programming style and max-tree representation used in the toolbox make it simple to use. The intended audience includes mathematical morphology students and researchers who want to develop research in the field, as well as image processing researchers who need a toolbox that is simple to use and easy to integrate into their applications.
    Funding: CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO - CNPQ (311228/2014-3); FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO - FAPESP (2013/23514-0; 2013/07559-
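The structure a max-tree encodes, connected components of upper threshold sets nested across gray levels, can be seen in a 1-D toy example. The brute-force sketch below recomputes the components at each threshold for illustration only; it is not the iamxt API, whose efficient tree construction does this in a single pass.

```python
import numpy as np

def upper_level_components(signal, t):
    """Connected components (runs) of {x : signal[x] >= t} in a 1-D signal.

    A max-tree compactly encodes how these components nest and merge as
    the threshold t decreases; each tree node is one component at one
    gray level. This naive version recomputes them per threshold.
    """
    above = np.asarray(signal) >= t
    comps, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i                      # a new run begins
        elif not a and start is not None:
            comps.append((start, i - 1))   # the current run ends
            start = None
    if start is not None:
        comps.append((start, len(above) - 1))
    return comps

sig = [0, 2, 2, 0, 3, 3, 3, 0]
# At t=1 there are two components; raising t to 3 keeps only the second,
# so the t=3 component is a child of the second t=1 component in the tree.
```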

    NeuralMind-UNICAMP at 2022 TREC NeuCLIR: Large Boring Rerankers for Cross-lingual Retrieval

    This paper reports on a study of cross-lingual information retrieval (CLIR) using the mT5-XXL reranker on the NeuCLIR track of TREC 2022. Perhaps the biggest contribution of this study is the finding that, despite the mT5 model being fine-tuned only on query-document pairs in the same language, it proved viable for CLIR tasks, where query and document are in different languages, even in the presence of suboptimal first-stage retrieval performance. The results show outstanding performance across all tasks and languages, leading to a high number of winning positions. Finally, this study provides valuable insights into the use of mT5 in CLIR tasks and highlights its potential as a viable solution. For reproduction refer to https://github.com/unicamp-dl/NeuCLIR22-mT
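The second-stage reranking pipeline itself is model-agnostic: score every (query, document) pair and sort by score. In the paper the scorer is mT5-XXL producing a relevance estimate; the stub scorer below is a hypothetical stand-in so the surrounding logic is runnable, and cross-lingual retrieval only changes the inputs, not this pipeline.

```python
def rerank(query, documents, score_fn):
    """Second-stage reranking: score every (query, document) pair and
    return the documents sorted by descending relevance score.

    `score_fn` stands in for a neural reranker such as mT5; any callable
    mapping (query, document) to a float works here.
    """
    scored = [(score_fn(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]

# Hypothetical toy scorer: term overlap. A real CLIR system would call a
# multilingual model instead, since overlap fails across languages.
def overlap_score(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = ["der schnelle braune Fuchs", "quick brown fox jumps", "lorem ipsum"]
ranked = rerank("quick fox", docs, overlap_score)
```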

    Open-source tool for Airway Segmentation in Computed Tomography using 2.5D Modified EfficientDet: Contribution to the ATM22 Challenge

    Airway segmentation in computed tomography images can be used to analyze pulmonary diseases; however, manual segmentation is labor-intensive and relies on expert knowledge. This manuscript details our contribution to MICCAI's 2022 Airway Tree Modelling (ATM22) challenge, a competition of fully automated methods for airway segmentation. We employed a previously developed deep learning architecture based on a modified EfficientDet (MEDSeg), trained from scratch for binary airway segmentation using the provided annotations. Our method achieved 90.72 Dice in internal validation, 95.52 Dice in external validation, and 93.49 Dice in the final test phase, despite not being specifically designed or tuned for airway segmentation. Open-source code, a graphical user interface, and a pip package for predictions with our model and trained weights are available at https://github.com/MICLab-Unicamp/medseg.
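The Dice scores quoted above follow the standard overlap definition, 2|A∩B| / (|A| + |B|), reported on a 0-100 scale. A minimal NumPy version:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap).

    Standard definition 2|A∩B| / (|A| + |B|); `eps` guards against
    division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy masks: 2 overlapping voxels out of 3 foreground voxels in each mask
p = np.array([[1, 1, 0], [0, 1, 0]])
t = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_score(p, t)  # 2*2 / (3+3) ≈ 0.667, i.e. 66.7 on a 0-100 scale
```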

    ExaRanker: Explanation-Augmented Neural Ranker

    Recent work has shown that inducing a large language model (LLM) to generate explanations prior to outputting an answer is an effective strategy to improve performance on a wide range of reasoning tasks. In this work, we show that neural rankers also benefit from explanations. We use LLMs such as GPT-3.5 to augment retrieval datasets with explanations and train a sequence-to-sequence ranking model to output a relevance label and an explanation for a given query-document pair. Our model, dubbed ExaRanker, is finetuned on a few thousand examples with synthetic explanations and performs on par with models finetuned on 3x more examples without explanations. Furthermore, the ExaRanker model incurs no additional computational cost during ranking and allows explanations to be requested on demand.
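The explanation-augmented training setup can be sketched as building seq2seq pairs whose target contains the relevance label followed by a natural-language explanation. The exact prompt and target templates below are illustrative assumptions, not the paper's verbatim format.

```python
def make_example(query, document, relevant, explanation):
    """Build one seq2seq training pair in the ExaRanker spirit.

    Source: the query-document pair. Target: a relevance label followed
    by an explanation. Templates here are hypothetical placeholders.
    """
    source = f"Query: {query} Document: {document} Relevant:"
    label = "true" if relevant else "false"
    target = f"{label}. Explanation: {explanation}"
    return source, target

src, tgt = make_example(
    "what causes tides",
    "Tides are driven mainly by the Moon's gravitational pull.",
    relevant=True,
    explanation="The document directly states the main cause of tides.",
)
```

Because the label is emitted first, ranking can stop generation after the first token, which is why producing explanations adds no cost at ranking time unless they are explicitly requested.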
